
    Google AI Studio

    Browse models provided by Google AI Studio (Terms of Service)

    19 models

    Tokens processed on OpenRouter (usage chart)

  1. Google: Nano Banana Pro (Gemini 3 Pro Image Preview)

      Nano Banana Pro is Google’s most advanced image-generation and editing model, built on Gemini 3 Pro. It extends the original Nano Banana with significantly improved multimodal reasoning, real-world grounding, and high-fidelity visual synthesis. The model generates context-rich graphics, from infographics and diagrams to cinematic composites, and can incorporate real-time information via Search grounding. It offers industry-leading text rendering in images (including long passages and multilingual layouts), consistent multi-image blending, and accurate identity preservation across up to five subjects. Nano Banana Pro adds fine-grained creative controls such as localized edits, lighting and focus adjustments, camera transformations, and support for 2K/4K outputs and flexible aspect ratios. It is designed for professional-grade design, product visualization, storyboarding, and complex multi-element compositions while remaining efficient for general image creation workflows.

    by google · 66K context · $2/M input tokens · $12/M output tokens · $120/M tokens
  2. Google: Gemini 3 Pro Preview

    Gemini 3 Pro is Google’s flagship frontier model for high-precision multimodal reasoning, combining strong performance across text, image, video, audio, and code with a 1M-token context window. Reasoning details must be preserved when using multi-turn tool calling; see the docs at https://openrouter.ai/docs/use-cases/reasoning-tokens#preserving-reasoning-blocks (a request sketch follows this entry). It delivers state-of-the-art benchmark results in general reasoning, STEM problem solving, factual QA, and multimodal understanding, including leading scores on LMArena, GPQA Diamond, MathArena Apex, MMMU-Pro, and Video-MMMU. Interactions emphasize depth and interpretability: the model is designed to infer intent with minimal prompting and produce direct, insight-focused responses. Built for advanced development and agentic workflows, Gemini 3 Pro provides robust tool-calling, long-horizon planning stability, and strong zero-shot generation for complex UI, visualization, and coding tasks. It excels at agentic coding (SWE-Bench Verified, Terminal-Bench 2.0), multimodal analysis, and structured long-form tasks such as research synthesis, planning, and interactive learning experiences. Suitable applications include autonomous agents, coding assistants, multimodal analytics, scientific reasoning, and high-context information processing.

    by google · 1.05M context · $2/M input tokens · $12/M output tokens
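
    A minimal sketch of the reasoning-preservation requirement noted above, using OpenRouter's OpenAI-compatible chat completions endpoint from Python. The model slug (google/gemini-3-pro-preview), the reasoning_details field name, and the get_weather tool are assumptions for illustration; verify them against the linked OpenRouter docs.

```python
import os
import requests

URL = "https://openrouter.ai/api/v1/chat/completions"
HEADERS = {"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"}

messages = [{"role": "user", "content": "What's the weather in Tokyo right now?"}]
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool, for illustration only
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

# Turn 1: the model may answer with tool calls plus reasoning blocks.
first = requests.post(URL, headers=HEADERS, json={
    "model": "google/gemini-3-pro-preview",  # assumed slug
    "messages": messages,
    "tools": tools,
}).json()
assistant_msg = first["choices"][0]["message"]

# Echo the assistant message back unmodified, including any reasoning_details,
# so the model's reasoning blocks are preserved for the follow-up turn.
messages.append(assistant_msg)
for call in assistant_msg.get("tool_calls", []):
    messages.append({
        "role": "tool",
        "tool_call_id": call["id"],
        "content": '{"temp_c": 21, "conditions": "clear"}',  # stubbed tool result
    })

# Turn 2: the model sees its own reasoning plus the tool output.
second = requests.post(URL, headers=HEADERS, json={
    "model": "google/gemini-3-pro-preview",
    "messages": messages,
    "tools": tools,
}).json()
print(second["choices"][0]["message"]["content"])
```
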
  3. Google: Gemini Embedding 001

    gemini-embedding-001 is a unified, cutting-edge embedding model for domains including science, legal, finance, and coding. It has consistently held a top spot on the Massive Text Embedding Benchmark (MTEB) Multilingual leaderboard since its experimental launch in March.

    by google · 20K context · $0.15/M input tokens · $0/M output tokens
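
    A hedged request sketch for the embedding model above. It assumes OpenRouter exposes an OpenAI-compatible /embeddings endpoint and that the slug is google/gemini-embedding-001; check the model page for the exact slug and any dimension or task-type parameters.

```python
import os
import requests

resp = requests.post(
    "https://openrouter.ai/api/v1/embeddings",  # assumed endpoint path
    headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
    json={
        "model": "google/gemini-embedding-001",  # assumed slug
        "input": [
            "indemnification clause in a SaaS contract",
            "CUDA kernel launch latency profiling",
        ],
    },
)
vectors = [item["embedding"] for item in resp.json()["data"]]
print(f"{len(vectors)} embeddings, dimension {len(vectors[0])}")
```
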
  4. Google: Gemini 2.5 Flash Image (Nano Banana)

    Gemini 2.5 Flash Image, a.k.a. "Nano Banana," is now generally available. It is a state-of-the-art image-generation model with contextual understanding, capable of image generation, edits, and multi-turn conversations. Aspect ratios can be controlled with the image_config API parameter (see the sketch after this entry).

    by google · 33K context · $0.30/M input tokens · $2.50/M output tokens · $30/M tokens
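
    The sketch referenced above: generating an image with Gemini 2.5 Flash Image and requesting a 16:9 aspect ratio via image_config. The slug (google/gemini-2.5-flash-image), the modalities field, and the exact image_config schema are assumptions to confirm against OpenRouter's image-generation docs.

```python
import os
import requests

resp = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
    json={
        "model": "google/gemini-2.5-flash-image",  # assumed slug
        "messages": [{
            "role": "user",
            "content": "A watercolor banana orbiting a tiny planet",
        }],
        "modalities": ["image", "text"],           # request image output
        "image_config": {"aspect_ratio": "16:9"},  # assumed parameter schema
    },
)
message = resp.json()["choices"][0]["message"]
# Generated images are expected as data URLs attached to the message.
for image in message.get("images", []):
    print(image["image_url"]["url"][:80], "...")
```
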
  5. Google: Gemini 2.5 Flash Preview 09-2025

    Gemini 2.5 Flash Preview September 2025 Checkpoint is Google's state-of-the-art workhorse model, specifically designed for advanced reasoning, coding, mathematics, and scientific tasks. It includes built-in "thinking" capabilities, enabling it to provide responses with greater accuracy and nuanced context handling. Additionally, Gemini 2.5 Flash is configurable through the "max tokens for reasoning" parameter, as described in the documentation (https://openrouter.ai/docs/use-cases/reasoning-tokens#max-tokens-for-reasoning).

    by google · 1.05M context · $0.30/M input tokens · $2.50/M output tokens · $1/M audio tokens
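
    A minimal sketch of capping this model's thinking budget with the reasoning parameter described in the linked documentation. The slug (google/gemini-2.5-flash-preview-09-2025) and the {"max_tokens": ...} shape are assumptions to verify against the current reasoning-tokens docs.

```python
import os
import requests

resp = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
    json={
        "model": "google/gemini-2.5-flash-preview-09-2025",  # assumed slug
        "messages": [{
            "role": "user",
            "content": "Prove that the sum of two even integers is even.",
        }],
        # Cap internal reasoning at roughly 1k tokens before the final answer.
        "reasoning": {"max_tokens": 1024},
    },
)
print(resp.json()["choices"][0]["message"]["content"])
```
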
  6. Google: Gemini 2.5 Flash Lite Preview 09-2025

    Gemini 2.5 Flash-Lite is a lightweight reasoning model in the Gemini 2.5 family, optimized for ultra-low latency and cost efficiency. It offers improved throughput, faster token generation, and better performance across common benchmarks compared to earlier Flash models. By default, "thinking" (i.e. multi-pass reasoning) is disabled to prioritize speed, but developers can enable it via the Reasoning API parameter to selectively trade off cost for intelligence.

    by google · 1.05M context · $0.10/M input tokens · $0.40/M output tokens
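
    Flash-Lite ships with thinking disabled by default; here is a hedged sketch of opting in via the reasoning parameter. Both the slug (google/gemini-2.5-flash-lite-preview-09-2025) and the {"enabled": true} form are assumptions to check against the OpenRouter reasoning docs.

```python
import os
import requests

resp = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
    json={
        "model": "google/gemini-2.5-flash-lite-preview-09-2025",  # assumed slug
        "messages": [{
            "role": "user",
            "content": "Which is larger, 2**30 or 10**9? Answer briefly.",
        }],
        "reasoning": {"enabled": True},  # opt in to multi-pass thinking
    },
)
print(resp.json()["choices"][0]["message"]["content"])
```
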
  7. Google: Gemini 2.5 Flash Image Preview (Nano Banana)

    Gemini 2.5 Flash Image Preview, a.k.a. "Nano Banana," is a state-of-the-art image-generation model with contextual understanding. It is capable of image generation, edits, and multi-turn conversations.

    by google · 33K context · $0.30/M input tokens · $2.50/M output tokens · $30/M tokens
  8. Google: Gemini 2.5 Flash Lite

    Gemini 2.5 Flash-Lite is a lightweight reasoning model in the Gemini 2.5 family, optimized for ultra-low latency and cost efficiency. It offers improved throughput, faster token generation, and better performance across common benchmarks compared to earlier Flash models. By default, "thinking" (i.e. multi-pass reasoning) is disabled to prioritize speed, but developers can enable it via the Reasoning API parameter to selectively trade off cost for intelligence.

    by google · 1.05M context · $0.10/M input tokens · $0.40/M output tokens · $0.30/M audio tokens
  9. Google: Gemma 3n 2B (free variant)

    Gemma 3n E2B IT is a multimodal, instruction-tuned model developed by Google DeepMind, designed to operate efficiently at an effective parameter size of 2B while leveraging a 6B architecture. Based on the MatFormer architecture, it supports nested submodels and modular composition via the Mix-and-Match framework. Gemma 3n models are optimized for low-resource deployment, offering 32K context length and strong multilingual and reasoning performance across common benchmarks. This variant is trained on a diverse corpus including code, math, web, and multimodal data.

    by google · 8K context · $0/M input tokens · $0/M output tokens
  10. Google: Gemini 2.5 Flash

    Gemini 2.5 Flash is Google's state-of-the-art workhorse model, specifically designed for advanced reasoning, coding, mathematics, and scientific tasks. It includes built-in "thinking" capabilities, enabling it to provide responses with greater accuracy and nuanced context handling. Additionally, Gemini 2.5 Flash is configurable through the "max tokens for reasoning" parameter, as described in the documentation (https://openrouter.ai/docs/use-cases/reasoning-tokens#max-tokens-for-reasoning).

    by google · 1.05M context · $0.30/M input tokens · $2.50/M output tokens · $1/M audio tokens
  11. Google: Gemini 2.5 Pro

    Gemini 2.5 Pro is Google’s state-of-the-art AI model designed for advanced reasoning, coding, mathematics, and scientific tasks. It employs “thinking” capabilities, enabling it to reason through responses with enhanced accuracy and nuanced context handling. Gemini 2.5 Pro achieves top-tier performance on multiple benchmarks, including first-place positioning on the LMArena leaderboard, reflecting superior human-preference alignment and complex problem-solving abilities.

    by google · 1.05M context · $1.25/M input tokens · $10/M output tokens · $2.50/M audio tokens
  12. Google: Gemma 3n 4B (free variant)

    Gemma 3n E4B-it is optimized for efficient execution on mobile and low-resource devices, such as phones, laptops, and tablets. It supports multimodal inputs—including text, visual data, and audio—enabling diverse tasks such as text generation, speech recognition, translation, and image analysis. Leveraging innovations like Per-Layer Embedding (PLE) caching and the MatFormer architecture, Gemma 3n dynamically manages memory usage and computational load by selectively activating model parameters, significantly reducing runtime resource requirements. This model supports a wide linguistic range (trained in over 140 languages) and features a flexible 32K token context window. Gemma 3n can selectively load parameters, optimizing memory and computational efficiency based on the task or device capabilities, making it well-suited for privacy-focused, offline-capable applications and on-device AI solutions. Read more in the blog post.

    by google · 32K context · $0/M input tokens · $0/M output tokens
  13. Google: Gemini 2.5 Pro Preview 05-06

    Gemini 2.5 Pro is Google’s state-of-the-art AI model designed for advanced reasoning, coding, mathematics, and scientific tasks. It employs “thinking” capabilities, enabling it to reason through responses with enhanced accuracy and nuanced context handling. Gemini 2.5 Pro achieves top-tier performance on multiple benchmarks, including first-place positioning on the LMArena leaderboard, reflecting superior human-preference alignment and complex problem-solving abilities.

    by google · 1.05M context · $1.25/M input tokens · $10/M output tokens
  14. Google: Gemma 3 4B (free variant)

    Gemma 3 introduces multimodality, supporting vision-language input and text outputs. It handles context windows up to 128k tokens, understands over 140 languages, and offers improved math, reasoning, and chat capabilities, including structured outputs and function calling.

    by google · 131K context · $0/M input tokens · $0/M output tokens
  15. Google: Gemma 3 12B (free variant)

    Gemma 3 introduces multimodality, supporting vision-language input and text outputs. It handles context windows up to 128k tokens, understands over 140 languages, and offers improved math, reasoning, and chat capabilities, including structured outputs and function calling. Gemma 3 12B is the second-largest model in the Gemma 3 family, after Gemma 3 27B.

    by google · 131K context · $0/M input tokens · $0/M output tokens
  16. Google: Gemma 3 27B (free variant)

    Gemma 3 introduces multimodality, supporting vision-language input and text outputs. It handles context windows up to 128k tokens, understands over 140 languages, and offers improved math, reasoning, and chat capabilities, including structured outputs and function calling. Gemma 3 27B is Google's latest open-source model and the successor to Gemma 2.

    by google · 131K context · $0/M input tokens · $0/M output tokens
  17. Google: Gemini 2.0 Flash Lite

    Gemini 2.0 Flash Lite offers a significantly faster time to first token (TTFT) compared to Gemini Flash 1.5, while maintaining quality on par with larger models like Gemini Pro 1.5, all at extremely economical token prices.

    by google · 1.05M context · $0.075/M input tokens · $0.30/M output tokens · $0.075/M audio tokens
  18. Google: Gemini 2.0 Flash

    Gemini Flash 2.0 offers a significantly faster time to first token (TTFT) compared to Gemini Flash 1.5, while maintaining quality on par with larger models like Gemini Pro 1.5. It introduces notable enhancements in multimodal understanding, coding capabilities, complex instruction following, and function calling. These advancements come together to deliver more seamless and robust agentic experiences.

    by google · 1M context · $0.10/M input tokens · $0.40/M output tokens · $0.70/M audio tokens
  19. Google: Gemini 2.0 Flash Experimental (free variant)

    Gemini Flash 2.0 offers a significantly faster time to first token (TTFT) compared to Gemini Flash 1.5, while maintaining quality on par with larger models like Gemini Pro 1.5. It introduces notable enhancements in multimodal understanding, coding capabilities, complex instruction following, and function calling. These advancements come together to deliver more seamless and robust agentic experiences.

    by google · 1.05M context · $0/M input tokens · $0/M output tokens