baidu

Browse models from baidu

5 models

(Token counts below are tokens processed on OpenRouter.)

1. Baidu: ERNIE 4.5 21B A3B Thinking
   1.8M tokens

   ERNIE-4.5-21B-A3B-Thinking is Baidu's upgraded lightweight MoE model, refined to boost reasoning depth and quality for top-tier performance in logical puzzles, math, science, coding, text generation, and expert-level academic benchmarks.

   by baidu · 131K context · $0.07/M input tokens · $0.28/M output tokens
2. Baidu: ERNIE 4.5 21B A3B
   3.66M tokens

   A sophisticated text-based Mixture-of-Experts (MoE) model featuring 21B total parameters with 3B activated per token, delivering strong text understanding and generation through heterogeneous MoE structures and modality-isolated routing. The model supports an extensive 131K token context length and achieves efficient inference via multi-expert parallel collaboration and quantization, while post-training techniques including SFT, DPO, and UPO, together with specialized routing and balancing losses, optimize performance across diverse applications.

   by baidu · 131K context · $0.07/M input tokens · $0.28/M output tokens
3. Baidu: ERNIE 4.5 VL 28B A3B
   5.27M tokens

   A powerful multimodal Mixture-of-Experts chat model featuring 28B total parameters with 3B activated per token, delivering exceptional text and vision understanding through its innovative heterogeneous MoE structure with modality-isolated routing. Built with scaling-efficient infrastructure for high-throughput training and inference, the model leverages advanced post-training techniques including SFT, DPO, and UPO for optimized performance, while supporting an impressive 131K context length and RLVR alignment for superior cross-modal reasoning and generation capabilities.

   by baidu · 131K context · $0.14/M input tokens · $0.56/M output tokens
4. Baidu: ERNIE 4.5 VL 424B A47B
   957K tokens

   ERNIE-4.5-VL-424B-A47B is a multimodal Mixture-of-Experts (MoE) model from Baidu's ERNIE 4.5 series, featuring 424B total parameters with 47B active per token. It is trained jointly on text and image data using a heterogeneous MoE architecture and modality-isolated routing to enable high-fidelity cross-modal reasoning, image understanding, and long-context generation (up to 131K tokens). Fine-tuned with techniques like SFT, DPO, UPO, and RLVR, this model supports both "thinking" and non-thinking inference modes. Designed for vision-language tasks in English and Chinese, it is optimized for efficient scaling and can operate under 4-bit/8-bit quantization.

   by baidu · 131K context · $0.42/M input tokens · $1.25/M output tokens
5. Baidu: ERNIE 4.5 300B A47B
   37.2M tokens

   ERNIE-4.5-300B-A47B is a 300B-parameter Mixture-of-Experts (MoE) language model developed by Baidu as part of the ERNIE 4.5 series. It activates 47B parameters per token and supports text generation in both English and Chinese. Optimized for high-throughput inference and efficient scaling, it uses a heterogeneous MoE structure with advanced routing and quantization strategies, including FP8 and 2-bit formats. This version is fine-tuned for language-only tasks and supports reasoning, tool calling, and extended context lengths up to 131K tokens. Suitable for general-purpose LLM applications with high reasoning and throughput demands.

   by baidu · 131K context · $0.28/M input tokens · $1.10/M output tokens
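
All five models are served through OpenRouter's OpenAI-compatible chat completions endpoint, and the listed prices are per million tokens, so a request's cost is input_tokens/1e6 × input price plus output_tokens/1e6 × output price. The sketch below shows one way to call the 300B model and estimate the cost of a response; the model slug "baidu/ernie-4.5-300b-a47b" and the OPENROUTER_API_KEY environment variable are assumptions, so check the model page for the exact ID before use.

    # Minimal sketch, assuming an OpenAI-compatible response schema
    # and the (unverified) model slug below.
    import os

    import requests

    API_URL = "https://openrouter.ai/api/v1/chat/completions"
    MODEL = "baidu/ernie-4.5-300b-a47b"  # assumed slug for ERNIE 4.5 300B A47B


    def estimate_cost_usd(input_tokens: int, output_tokens: int,
                          input_per_m: float = 0.28,
                          output_per_m: float = 1.10) -> float:
        """Estimate request cost from the listed per-million-token prices."""
        return (input_tokens / 1e6) * input_per_m + (output_tokens / 1e6) * output_per_m


    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
        json={
            "model": MODEL,
            "messages": [
                {"role": "user",
                 "content": "Summarize the ERNIE 4.5 series in one sentence."},
            ],
        },
        timeout=60,
    )
    response.raise_for_status()
    data = response.json()
    print(data["choices"][0]["message"]["content"])

    # Usage accounting follows the OpenAI schema: e.g. 10,000 input and
    # 2,000 output tokens on this model come to about $0.0028 + $0.0022 = $0.0050.
    usage = data.get("usage", {})
    print(estimate_cost_usd(usage.get("prompt_tokens", 0),
                            usage.get("completion_tokens", 0)))

The same request works for any model in the list by swapping the slug and the two prices; for the "thinking" variants, reasoning behavior is controlled through the request parameters documented on each model's page rather than through a different endpoint.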